Images for the practical work can be found at: https://perso.telecom-paristech.fr/nicolas/MVA/SEANCE14fevrier/images/
You have the following resources:
Code for the practical work is available here: https://perso.telecom-paristech.fr/nicolas/MVA/SEANCE14fevrier/
Some useful functions are provided in the file mvalab.py.
A useful function to read images in the Télécom-Paristech format is imz2mat.
import math
import numpy as npy
import matplotlib.pyplot as plt
import scipy
from scipy import signal
import mvalab
import filtrage as correctionfiltrage
tagnotebook=1
mvalab.notebook(tagnotebook)
url='https://perso.telecom-paristech.fr/nicolas/MVA/SEANCE14fevrier/images/'
imagesar=mvalab.imz2mat(url+'TSX_GC_maitre.cxs')
tableauimage=imagesar[0]
ncol=imagesar[1]
nlig=imagesar[2]
The function to display a SAR image with an adapted thresholding of the values is visusar. The first argument, containing the image, is compulsory; the second one is optional and corresponds to a threshold for the display. The value $\alpha$ defines the threshold in this way:
${\rm thres} \:\:=\:\: {\rm vmean} \:\: + \:\: \alpha \: {\rm std}$
vmean is the mean value and std the standard deviation computed on the whole image. Every pixel whose value is above this threshold is displayed with the maximal value.
When $\alpha$ is 0, the image is displayed with the full dynamic range. A usual value to display SAR images is 3.
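The thresholding described above can be sketched as follows (a minimal reimplementation for illustration only; the actual mvalab code may differ, and the function name seuil_affichage is hypothetical):

```python
import numpy as np

def seuil_affichage(image, alpha=3.0):
    """Sketch of visusar-style display thresholding.

    Amplitudes above vmean + alpha * std are clipped to the
    maximal displayed value; alpha = 0 keeps the full dynamic range.
    """
    amp = np.abs(image)                 # SAR images are complex
    vmean, std = amp.mean(), amp.std()
    if alpha == 0:
        return amp                      # full dynamic range
    thres = vmean + alpha * std
    return np.minimum(amp, thres)       # saturate bright scatterers
```

Clipping (rather than rescaling) is what keeps the strong point scatterers from crushing the dynamic range of the rest of the scene.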
Use visuinterfero on the phase (do not forget the ``angle'' function).
To visualize an image and its spectrum (with automatic thresholding for the display), use the function visusarspectre.
mvalab.visusar(tableauimage)
mvalab.visuinterfero(npy.angle(tableauimage),0)
mvalab.visusarspectre(tableauimage)
These are free data provided by Airbus over the Grand Canyon (Colorado River), US. The parameters of the sensor for this acquisition are the following:
What is the size of a ground range cell?
$\textbf{Size of ground range cell: } \frac{c}{2 F_e \sin \theta}$
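With the sensor parameters used later in this notebook ($F_e = 109.89$ MHz, $\theta = 39.15^\circ$), this formula can be evaluated numerically:

```python
import math

c = 299792458.0               # speed of light (m/s)
Fe = 109890000.0              # range sampling frequency (Hz)
theta = math.radians(39.15)   # incidence angle

# ground range cell size: c / (2 Fe sin(theta))
ground_cell = c / (2 * Fe * math.sin(theta))
print(ground_cell)            # about 2.16 m
```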
These values will be used in the following to compute the contribution of the phase ramp corresponding to the orbital fringes.
The two images to consider are the following:
url='https://perso.telecom-paristech.fr/nicolas/MVA/SEANCE14fevrier/images/'
imamaitre=mvalab.imz2mat(url+'TSX_GC_maitre.cxs')
imaslave = mvalab.imz2mat(url+'TSX_GC_slave.cxs')
mvalab.visusar(imamaitre[0])
mvalab.visusar(imaslave[0])
These images are complex images. They are given in their original acquisition geometry; thus, they are not directly superimposable. You can flicker between them using mvalab.visuflicker (you have to use spyder) on a chosen window: mvalab.visuflicker(imamaitre[0][200:400,200:400],imaslave[0][200:400,200:400])
To do a registration at the pixel scale, you can find the maximum of the correlation between the two amplitude images. This is done in the Fourier domain by computing the hermitian product of the two Fourier transforms; the inverse Fourier transform then gives the correlation image.
Compute the translation to apply and give its value. Then compute the interferogram by computing the hermitian product between the two registered images (pixel by pixel, 1-look interferogram). What do you see?
ima1fft=npy.fft.fft2(npy.abs(imamaitre[0]))
ima2fft=npy.fft.fft2(npy.abs(imaslave[0]))
imacorfft = npy.multiply(ima1fft, npy.conjugate(ima2fft))
imacor=npy.fft.ifft2(imacorfft)
mvalab.visusar(imacor)
idecalage=npy.unravel_index(npy.argmax(npy.abs(imacor)), imacor.shape)
# shift the slave image by the estimated offset (npy.roll wraps around)
imaslaveroll=npy.roll(imaslave[0], idecalage[0], axis=0)
imaslaveroll=npy.roll(imaslaveroll, idecalage[1], axis=1)
interfero=npy.multiply(imamaitre[0], npy.conj(imaslaveroll) )
mvalab.visuinterfero(npy.angle(interfero)+math.pi,0)
mvalab.visusarspectre(imamaitre[0])
The two images are almost registered, but in practice the ground cells are slightly different since the two acquisitions were not made under exactly the same incidence angle.
The formulas giving the ground cell sizes are:
$\Delta x_M = \frac{c}{2 \: F_e \: \sin \theta}$
and
$\Delta x_S = \frac{c}{2 \: F_e \: \sin \left(\theta + \delta \theta\right)}$
$\delta \theta$ being known thanks to $B_{ortho}$ and $R$.
We know that $\sin(\delta \theta) \approx \delta \theta$ since $\delta \theta$ is small.
What do you think of the ground cell difference? What is its value?
What is the ratio with the "ground wavelength" $\lambda_{sol}=\frac{\lambda}{\sin \theta}$?
What is the corresponding range difference in the line of sight? To which phase shift does it correspond (taking into account the round trip of the wave, $\delta \phi = \frac{4\pi \delta R}{\lambda}$)?
This phase shift corresponds to one pixel; what is the global phase shift between near range and far range for the whole image? Check on the previous interferometric phase image that the number of fringes is in accordance with your value.
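As a quick sanity check, the expected number of orbital fringes across the image is simply the global phase shift divided by $2\pi$ (using the value of dephasage computed further down in this notebook, about 99.67 rad):

```python
import math

# total phase shift accumulated from near range to far range (rad),
# as computed later in this notebook for the Grand Canyon pair
dephasage = 99.671602

# one fringe corresponds to a full 2*pi cycle of phase
nb_franges = dephasage / (2 * math.pi)
print(nb_franges)   # about 15.9 fringes across the image
```

This count can be compared visually with the fringes of the 1-look interferogram displayed above.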
To take into account this phase shift for each pixel, apply a phase ramp, dephasage being the total phase shift over the number of range pixels of the image:
rampephase=npy.linspace(0.,dephasage,ncolonnes)
malignephase=npy.exp(1j*rampephase)
imaslavefine=npy.multiply(imaslaveroll,malignephase)
$\textbf{Computation:}$
First we note that, by the small-angle approximation, $\sin \delta \theta \approx \delta \theta = \frac{B_{ortho}}{R}$. We can then compute $\Delta x_M - \Delta x_S = \frac{c}{2 F_e \sin \theta} - \frac{c}{2 F_e \sin \left(\theta + \delta \theta\right)}$ by bringing the terms over a common denominator: $\frac{c}{2F_e} \frac{\sin(\theta + \delta \theta) - \sin \theta}{\sin \theta \, \sin(\theta + \delta \theta)}$. We use two approximations: the first-order Taylor expansion $\sin(\theta + \delta \theta) \approx \sin\theta + \delta \theta \cos\theta$ in the numerator, and, since $\delta \theta$ is small, $\sin (\theta + \delta \theta) \sin\theta \approx \sin^2\theta$ in the denominator (the incidence angle is about 39 degrees and the deviation is tiny, so this changes the value very little). The ground cell difference is therefore $\Delta x = \frac{c}{2F_e} \frac{\delta \theta}{\sin \theta \, \tan \theta}$.
This produces a range difference of $\Delta r = \Delta x \sin \theta \approx \frac{c}{2F_e} \frac{\delta \theta}{\tan \theta}$.
Thus the phase shift is $\delta \phi = \frac{2\pi}{\lambda} \frac{c}{F_e} \frac{\delta \theta}{\tan \theta}$.
c = 299792458
Fe = 109890000
Borth = 46.5
theta = 39.15*(2*math.pi)/360.0
R = 646000
delta_theta = Borth/R
lambda_ = 0.0311
# without the approximations
delta_sol = (c/(2*Fe)) * (npy.cos(theta+delta_theta)*delta_theta)/(npy.sin(theta+delta_theta)*npy.sin(theta))
delta_range = delta_sol*npy.sin(theta)
delta_phi = 4*math.pi*delta_range/lambda_
print("without approximations: ", delta_phi*2048)
delta_phi_approx_per_cell = (4*math.pi*c*delta_theta)/(lambda_*2*Fe*npy.tan(theta))
print("with approximations: ", delta_phi_approx_per_cell*2048)
dephasage = 99.671602  # total phase shift over the 2048 range pixels, computed above
ncolonnes=imamaitre[1]
rampephase=npy.linspace(0.,dephasage,ncolonnes)
malignephase=npy.exp(1j*rampephase)
imaslavefine=npy.multiply(imaslaveroll,malignephase)
To compute the interferometric product, you now have to compute the multi-look hermitian product given by the formula:
$d(i,j)=\gamma(i,j) \, e^{j \phi(i,j)}$
$d(i,j)=\frac{\sum_{(i',j') \in {{V}}(i,j)} I_m(i',j')I_s^*(i',j')} { \sqrt{\sum_{(i',j') \in {{V}}(i,j)} I_m(i',j')I_m^*(i',j')} \sqrt{\sum_{(i',j') \in {{V}}(i,j)} I_s(i',j')I_s^*(i',j')} } $
The usual neighborhood ${V}(i,j)$ is a square of size 3$\times$3 pixels. It gives:
Compute and compare the interferometric product for the two images (master and slave), with and without the sub-pixel phase shift.
You can fill in the python program filtrage.py to compute this function.
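One possible way to fill in filtrage.interferogramme is sketched below (an assumption on the lab's intended implementation: it uses a 3$\times$3 box neighborhood and scipy's 2-D convolution; your own filtrage.py may be structured differently):

```python
import numpy as np
from scipy import signal

def interferogramme(im_maitre, im_esclave, taille=3):
    """Multi-look hermitian product d(i,j) over a taille x taille window.

    Returns a complex image whose modulus is the coherence gamma(i,j)
    and whose argument is the interferometric phase phi(i,j).
    """
    noyau = np.ones((taille, taille))
    # numerator: local sum of the pixel-wise hermitian product
    num = signal.convolve2d(im_maitre * np.conj(im_esclave), noyau, mode='same')
    # denominator: local energies of each image
    den_m = signal.convolve2d(np.abs(im_maitre) ** 2, noyau, mode='same')
    den_s = signal.convolve2d(np.abs(im_esclave) ** 2, noyau, mode='same')
    return num / np.sqrt(den_m * den_s)
```

By the Cauchy-Schwarz inequality the modulus of the result is at most 1, and it equals 1 everywhere when the two images are identical.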
import filtrage
# you have to complete filtrage to do the complex multi-looking
# the image after pixel-level alignment via the correlation (matched filter)
interf=filtrage.interferogramme(imamaitre[0], imaslaveroll)
mvalab.visusar(npy.abs(interf))
mvalab.visuinterfero(npy.angle(interf),0)
# subpixel alignment
interfnoramp=filtrage.interferogramme(imamaitre[0], imaslavefine)
mvalab.visusar(npy.abs(interfnoramp))
mvalab.visuinterfero(npy.angle(interfnoramp),0)
A direct re-sampling of the slave image into the master geometry is also possible. In this case, the cell size is exactly the same as in the master image.
Download the image TSX_GC_slave_recalee and compute the interferogram as previously. What do you observe?
#processing of the registered image
imaslave_recalee = mvalab.imz2mat(url+'TSX_GC_slave_recalee.cxs')
mvalab.visusar(imaslave_recalee[0])
interfrecal=filtrage.interferogramme(imamaitre[0], imaslave_recalee[0])
mvalab.visusar(npy.abs(interfrecal))
mvalab.visuinterfero(npy.angle(interfrecal),0)
hamb = lambda_*R*npy.sin(theta)*0.5/Borth
print("h_amb", hamb)
print("height", 5*hamb)
When taking into account the phase shift due to pixel-size differences, what do the fringes on the left correspond to?
$\textbf{Fringes:}$ Each fringe (complete color cycle) corresponds to an elevation change of $h_{amb}$, the height difference corresponding to a full $2 \pi$ cycle of phase.
Compute the ambiguity height given by :
$h_{amb} = \frac{\lambda R \: \sin \theta}{ 2 B_{ortho} }$
Which mean depth of the Grand Canyon can be deduced from the interferogram? Explain.
$\textbf{Height:}$ We see about 5 fringes, so we can estimate the depth of the Grand Canyon at $5 \cdot h_{amb}$. This is computed above as about 136 m for $h_{amb}$ and 682 m for the total depth.
The following parameters have to be used for these data:
Do the same processing as before with these new data.
What is the pixel phase shift in this case?
What is the ambiguity height?
Which altitude do you find for Mount Etna using your interferogram? (Explain.)
Which are the areas of low coherence? Explain why.
$\textbf{Calculations: }$ Dephasage: approximately 652 radians across the image width. Ambiguity height: approximately 147 m. Height of the mountain: approximately 20 fringes (perhaps a few more; it is hard to count near the top of the mountain, where the ambiguity height is attained so quickly due to the steepness). Thus the height of the mountain is very roughly 3000 m (perhaps plus a couple of multiples of 147 m). This assumes the body of water is at sea level (see below).
$\textbf{Low coherence: }$ The areas of low coherence are the areas with poor fringe quality, i.e. the area near the top of the peak and the water body on the right of the scene. Coherence measures the match between the two images: it is the normalized expected value of the first image times the conjugate of the second. When the phases are essentially random, each term of the numerator is the product of the two magnitudes times $e^{j(\text{phase difference})}$, with the phase difference varying randomly over a $2 \pi$ interval; positive and negative contributions cancel, so the expected value, the coherence, is low. This happens, for instance, over bodies of water, which contain ripples or fail to reflect much of the signal back to the satellite: what one observes is noise. It can also happen on very steep, jagged cliff faces near a mountaintop; the fringes are narrow there because the mountain is so steep that the ambiguity height is attained within very few pixels. Near the very top the SNR is low, though perhaps not as low as over the worst part of the water.
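The decorrelation argument above can be illustrated with a small simulation (purely illustrative; none of the names below come from mvalab). With independent random phases the sample coherence collapses toward 0, while a slave that differs from the master only by small phase noise keeps a coherence near 1:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100000

def coherence(x, y):
    """Sample coherence: normalized expectation of the hermitian product."""
    num = np.abs(np.mean(x * np.conj(y)))
    den = np.sqrt(np.mean(np.abs(x) ** 2) * np.mean(np.abs(y) ** 2))
    return num / den

master = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
# slave nearly identical to the master (small phase noise): high coherence
slave_corr = master * np.exp(1j * rng.normal(0, 0.1, N))
# slave with independent random phases (water, pure noise): low coherence
slave_rand = np.exp(1j * rng.uniform(0, 2 * np.pi, N))

print(coherence(master, slave_corr))   # close to 1
print(coherence(master, slave_rand))   # close to 0
```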
c = 299792458
Fe = 18960000
Borth = -54.5
theta = 20*(2*math.pi)/360.0
R = 826000
delta_theta = Borth/R
lambda_ = 0.0566
delta_sol = (c/(2*Fe)) * (npy.cos(theta+delta_theta)*delta_theta)/(npy.sin(theta+delta_theta)*npy.sin(theta))
delta_range = delta_sol*npy.sin(theta)
delta_phi = 4*math.pi*delta_range/lambda_
print("dephasage without approximations: ", delta_phi*2048)
delta_phi_approx_per_cell = (4*math.pi*c*delta_theta)/(lambda_*2*Fe*npy.tan(theta))
print("dephasage with approximations: ", delta_phi_approx_per_cell*2048)
hamb = lambda_*R*npy.sin(theta)*0.5/Borth
print("h_amb: ", hamb)
print("height: ", 20*hamb)
# use the previous steps to process Etna data
# original server path; overridden below with a local copy of the data
repertoireMAT='/cal/homes/nicolas/public_html/MVA/SEANCE14fevrier/'
import os
repertoireMAT = os.getcwd() + "/etna/"
print(repertoireMAT)
imageM='Master.mat'
imageS='Slave.mat'
imamaitre=mvalab.matlab2imz(repertoireMAT+imageM,'Master')
imaslave=mvalab.matlab2imz(repertoireMAT+imageS,'Slave')
ncolonnes=imamaitre[1]
nlignes=imamaitre[2]
#%%
mvalab.visusar(imamaitre[0])
mvalab.visusar(imaslave[0])
# correlation/matched filter for pixel alignment
ima1fft=npy.fft.fft2(npy.abs(imamaitre[0]))
ima2fft=npy.fft.fft2(npy.abs(imaslave[0]))
imacorfft = npy.multiply(ima1fft, npy.conjugate(ima2fft))
imacor=npy.fft.ifft2(imacorfft)
mvalab.visusar(imacor)
idecalage=npy.unravel_index(npy.argmax(npy.abs(imacor)), imacor.shape)
#
imaslaveroll=npy.roll(imaslave[0], idecalage[0], axis=0)
imaslaveroll=npy.roll(imaslaveroll, idecalage[1], axis=1)
interfero=npy.multiply(imamaitre[0], npy.conj(imaslaveroll) )
mvalab.visuinterfero(npy.angle(interfero)+math.pi,0)
mvalab.visusarspectre(imamaitre[0])
# computed dephasage from above cell
dephasage = delta_phi*ncolonnes
# subpixel phase alignment
rampephase=npy.linspace(0.,dephasage,ncolonnes)
malignephase=npy.exp(1j*rampephase)
imaslavefine=npy.multiply(imaslaveroll,malignephase)
# interferogram from the rough (correlation/matched-filter, pixel-wise) alignment
interf=filtrage.interferogramme(imamaitre[0], imaslaveroll)
mvalab.visusar(npy.abs(interf))
mvalab.visuinterfero(npy.angle(interf),0)
# interferogram from the fine alignment
interf=filtrage.interferogramme(imamaitre[0], imaslavefine)
mvalab.visusar(npy.abs(interf))
mvalab.visuinterfero(npy.angle(interf),0)